Implement Sentry instrumentation for performance and error tracking #470
Conversation
This update adds optional Sentry integration for performance tracking and error monitoring. Key changes include:
- Add conditional Sentry import and initialization check
- Implement Sentry spans in FrameProcessorMetrics to measure TTFB (Time To First Byte) and processing time when Sentry is available
- Maintain existing metrics functionality with MetricsFrame regardless of Sentry availability
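As an illustration of the span-based timing described above, here is a minimal, hypothetical sketch (the class and attribute names are placeholders, not the PR's actual code; it only assumes `sentry_sdk`'s `is_initialized` and `start_transaction` APIs):

```python
import time

import sentry_sdk


class TTFBSpanTimer:
    """Illustrative helper: times TTFB and mirrors it into a Sentry span when available."""

    def __init__(self, processor_name: str):
        self._processor_name = processor_name
        self._span = None
        self._start_time = 0.0

    def start_ttfb(self):
        self._start_time = time.time()
        if sentry_sdk.is_initialized():
            # In real code this would usually be a child span of the current transaction;
            # a standalone transaction keeps the sketch self-contained.
            self._span = sentry_sdk.start_transaction(
                op="ttfb", name=f"TTFB for {self._processor_name}"
            )

    def stop_ttfb(self) -> float:
        # Plain timing still works when Sentry is absent; the span is just extra.
        ttfb = time.time() - self._start_time
        if self._span is not None:
            self._span.finish()
            self._span = None
        return ttfb
```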
This commit enhances the DeepgramSTTService class to enable metrics generation for use with Sentry. Key changes include:
1. Enable general metrics generation:
   - Implement `can_generate_metrics` method, returning True when VAD is enabled
   - This allows metrics to be collected and used by both Sentry and the metrics system in frame_processor.py
2. Integrate Sentry-compatible performance tracking:
   - Add start_ttfb_metrics and start_processing_metrics calls in the VAD speech detection handler
   - Implement stop_ttfb_metrics call when receiving transcripts
   - Add stop_processing_metrics for final transcripts
3. Enhance VAD support for metrics:
   - Add `vad_enabled` property to check VAD event availability
   - Implement VAD-based speech detection handler for precise metric timing

These changes enable detailed performance tracking via both Sentry and the general metrics system when VAD is active. This allows for better monitoring and analysis of the speech-to-text process, providing valuable insights through Sentry and any other metrics consumers in the pipeline.
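A simplified, hypothetical sketch of where those hooks fire relative to VAD events and transcripts (the handler names and the duck-typed `metrics` object are placeholders, not the actual Deepgram wiring):

```python
class VadTimedSTTService:
    """Illustrative only: shows the ordering of the metric calls described above."""

    def __init__(self, metrics, vad_enabled: bool):
        self._metrics = metrics          # any object exposing the start/stop hooks below
        self._vad_enabled = vad_enabled  # True when the STT provider emits VAD events

    @property
    def can_generate_metrics(self) -> bool:
        # Timing only makes sense when VAD tells us when the user started speaking.
        return self._vad_enabled

    async def _on_speech_started(self):
        # VAD detected speech: open both the TTFB and processing windows.
        await self._metrics.start_ttfb_metrics()
        await self._metrics.start_processing_metrics()

    async def _on_transcript(self, text: str, is_final: bool):
        # The first transcript closes the TTFB window; the final one closes processing time.
        await self._metrics.stop_ttfb_metrics()
        if is_final:
            await self._metrics.stop_processing_metrics()
```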
This is fantastic. Thank you! I was thinking that in the future we might want to use something different than Sentry. What if we create a base metrics class that each service could extend with its own implementation?
Thank you for your feedback @aconchillo! I really like your suggestion. It's definitely a more modular and scalable approach. I'll start working on this asap!
Refactor to support a flexible metrics implementation:
- Modified the __init__ method to accept a metrics parameter that is either FrameProcessorMetrics or one of its subclasses
- Updated the metrics initialization to create an instance with the processor's name
- Moved all FrameProcessorMetrics-related logic to a new processors/metrics/base.py file
1. Created a new metrics module in processors/metrics/
2. Implemented FrameProcessorMetrics base class in base.py
3. Implemented SentryMetrics class in sentry.py:
   - Inherits from FrameProcessorMetrics
   - Integrates with Sentry SDK for advanced metrics tracking
   - Implements Sentry-specific span creation and management for TTFB and processing metrics
   - Handles cases where Sentry is not available or initialized
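Roughly, the split described above has this shape (a sketch with placeholder method bodies; the real classes carry timing state and emit MetricsFrames):

```python
# processors/metrics/base.py -- shape only
class FrameProcessorMetrics:
    """Default implementation: plain timing metrics, no external backend."""

    async def start_ttfb_metrics(self):
        ...  # record the TTFB start time

    async def stop_ttfb_metrics(self):
        ...  # compute TTFB and push it downstream as a MetricsFrame


# processors/metrics/sentry.py -- shape only
class SentryMetrics(FrameProcessorMetrics):
    """Adds Sentry spans on top of the base behaviour when the SDK is initialized."""

    async def start_ttfb_metrics(self):
        await super().start_ttfb_metrics()
        # open a Sentry span here (skipped when Sentry is unavailable)

    async def stop_ttfb_metrics(self):
        await super().stop_ttfb_metrics()
        # finish the Sentry span here
```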
@aconchillo I've updated the PR to implement a flexible metrics system as you proposed. Here are the key changes:
Now you can choose the metrics implementation for each service. For example:

    from pipecat.processors.metrics.sentry import SentryMetrics

    tts = CartesiaTTSService(
        api_key=os.getenv("CARTESIA_API_KEY"),
        model_id="sonic-multilingual",
        voice_id="65b25c5d-ff07-4687-a04c-da2f43ef6fa9",
        language="fr",
        metrics=SentryMetrics,
    )

If the `metrics` parameter is not provided, the default `FrameProcessorMetrics` is used.
    class FrameProcessor:

        def __init__(
            self,
            *,
            name: str | None = None,
            metrics: FrameProcessorMetrics = FrameProcessorMetrics,
I think this should be `FrameProcessorMetrics()` instead of passing a class. It's possible that other metrics implementations need additional arguments (API keys, user name, etc.). If we hide the construction we would not be able to do that.

What we can do (since we need the name) is add a `FrameProcessorMetrics.set_processor_name()`:

    self._metrics = metrics
    metrics.set_processor_name(self.name)
With `metrics: FrameProcessorMetrics = FrameProcessorMetrics()`, the `FrameProcessorMetrics()` instance is created once, when the `FrameProcessor` class is defined. As a result, every new FrameProcessor uses the same instance of FrameProcessorMetrics, and that's a problem. My idea was to use `metrics: FrameProcessorMetrics | None = None` and

    self._metrics = metrics or FrameProcessorMetrics()
    self._metrics.set_processor_name(self.name)

to create a new instance of `FrameProcessorMetrics` inside the constructor if metrics is not provided. Let me know if I'm missing something, thanks.
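The pitfall being described is standard Python behavior: a default argument value is evaluated once, when the function is defined, so a default `FrameProcessorMetrics()` instance would be shared by every processor. A minimal stand-alone illustration (class names are made up for the demo):

```python
class Metrics:
    pass


class SharedDefault:
    def __init__(self, metrics: Metrics = Metrics()):  # default evaluated once, at definition time
        self._metrics = metrics


class FreshPerInstance:
    def __init__(self, metrics: Metrics | None = None):
        self._metrics = metrics or Metrics()  # new instance unless one is supplied


a, b = SharedDefault(), SharedDefault()
print(a._metrics is b._metrics)  # True: both objects share the same Metrics instance

c, d = FreshPerInstance(), FreshPerInstance()
print(c._metrics is d._metrics)  # False: each object gets its own Metrics instance
```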
Oh yes! You are totally right 😅. Well, the important point was for people to be able to create instances from outside, which we now can do! 😄
Looks great! There was a bigger metrics PR that got merged yesterday, so unfortunately you will need to rebase your changes. Sorry :-(. Btw, I think instead of
done ✅
    import sentry_sdk
    sentry_available = sentry_sdk.is_initialized()
    if not sentry_available:
        logger.debug("Sentry SDK not initialized. Sentry features will be disabled.")
it seems this should be a warning.
done ✅
logger.debug("Sentry SDK not initialized. Sentry features will be disabled.") | ||
except ImportError: | ||
sentry_available = False | ||
logger.debug("Sentry SDK not installed. Sentry features will be disabled.") |
same here.
done ✅
LGTM! I added a couple of minor comments. Before I merge, do you mind fixing the linting issues? Just run autopep8. We will be moving to Ruff shortly, but for now that's what we are checking.
Fixed! Thanks for the help @aconchillo
Merging. Thank you!
Thank you @aconchillo!
I just saw... not sure. I'll take a look and fix. Btw, I'll enable
You are right,
It's because of this: c7ff79a. But I just merged the Ruff PR.
Hey, I have read the doc about how to use Sentry in pipecat, but I am still confused. Can you share more examples about that?
It worked on TTS, but didn't work on LLM.
This Pull Request introduces Sentry integration for performance tracking and error monitoring.
Key changes:
DeepgramSTTService:
- Enable general metrics generation:
  - `can_generate_metrics` method, returning True when VAD is enabled
- Integrate Sentry-compatible performance tracking
- Enhance VAD support for metrics:
  - `vad_enabled` property to check VAD event availability

These changes allow for comprehensive performance monitoring and analysis using Sentry, providing valuable insights into STT, LLM, and TTS services. The integration is designed to be non-intrusive, maintaining existing functionality while adding custom performance instrumentation.
Testing:
To validate the Sentry integration, the following steps were taken:
- In `examples/simple-chatbot/demo.py`, the `vad_events` parameter in the DeepgramSTTService configuration was set to `True`.
- The chatbot was run with this configuration, and the results were observed in the Sentry dashboard.
- A screenshot of the Sentry performance monitoring results demonstrates successful tracking of STT, LLM, and TTS service performance metrics.
These tests confirm that Sentry is correctly integrated and capable of monitoring both performance and errors across the pipeline components.
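For reference, a hedged sketch of the kind of configuration change described; whether `vad_events` is passed through the Deepgram SDK's `LiveOptions` or directly, and the exact constructor keywords, may differ between pipecat versions:

```python
import os

from deepgram import LiveOptions
from pipecat.processors.metrics.sentry import SentryMetrics
from pipecat.services.deepgram import DeepgramSTTService

# Assumption: vad_events=True makes Deepgram emit speech-started events,
# which this PR uses to start the TTFB/processing timers.
stt = DeepgramSTTService(
    api_key=os.getenv("DEEPGRAM_API_KEY"),
    live_options=LiveOptions(vad_events=True),
    metrics=SentryMetrics(),
)
```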
Notes:
It's crucial to initialize Sentry as early as possible in the application, and before `from pipecat.processors.frame_processor`, so that `sentry_sdk.is_initialized()` returns the correct state when called in `frame_processor.py`.
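For example, a minimal sketch of that initialization order (the DSN and sample rate are placeholders):

```python
import os

import sentry_sdk

# Initialize Sentry before importing anything from pipecat that checks
# sentry_sdk.is_initialized() at import/construction time.
sentry_sdk.init(
    dsn=os.getenv("SENTRY_DSN"),
    traces_sample_rate=1.0,  # capture all transactions; tune down in production
)

from pipecat.processors.frame_processor import FrameProcessor  # noqa: E402
from pipecat.processors.metrics.sentry import SentryMetrics  # noqa: E402
```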
Future Improvements:
Usage Metrics in Sentry Spans:
I plan to enhance the Sentry integration by adding "usage_metrics" to Sentry spans as tags.
This improvement will provide more detailed and contextual information about resource usage within each service, allowing for better analysis and optimization of our pipeline components.
Review TTS Processing Time Measurement:
I need to review and potentially refine the way we measure processing time for the Text-to-Speech (TTS) service.